    Overcoming perceptual features in logical reasoning: an event-related potentials study

    It is more difficult for reasoners to detect that the letter-number pair H7 verifies the conditional rule If there is not a T then there is not a 4 than to detect that it verifies the rule If there is an H then there is a 7. In prior work [Prado, J., & Noveck, I. A. (2007). Overcoming perceptual features in logical reasoning: a parametric functional magnetic resonance imaging study. Journal of Cognitive Neuroscience, 19(4), 642-657], we argued that this difficulty was due to mismatching effects, i.e. perceptual mismatches that arise when the items mentioned in the rule (e.g. T and 4) mismatch those presented in the test pair (H and 7). The present study aimed to test this claim directly by recording ERPs while participants evaluated conditional rules in the presence or absence of mismatches. We found that mismatches not only trigger a frontocentral N2 (an ERP component known to be related to perceptual mismatch) but also parametrically modulate its amplitude (e.g. two mismatches prompt a greater N2 than one). Our results indicate that the main role of negations in conditional rules is to focus attention on the negated constituent, but they also suggest that there are inter-individual differences in the way participants apprehend such negations, as indicated by a correlation between N2 amplitude and participants' reaction times. Overall, these findings emphasize how overcoming perceptual features plays a role in the mismatching effect and extend the mismatch-related effects of the N2 to a reasoning task.
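    To make the rule-verification and mismatch-counting logic concrete, here is a minimal sketch (illustrative only, not the authors' materials or analysis; the class and function names are invented) that checks whether a letter-number pair verifies an affirmative or doubly negated conditional rule and counts the perceptual mismatches involved.

```python
# Illustrative sketch: evaluating whether a letter-number pair verifies a rule
# of the form "If there is [not] <letter> then there is [not] <digit>"
# and counting perceptual mismatches between rule items and pair items.

from dataclasses import dataclass

@dataclass
class ConditionalRule:
    letter: str                      # letter named in the antecedent, e.g. "T"
    digit: str                       # digit named in the consequent, e.g. "4"
    negated_antecedent: bool = False
    negated_consequent: bool = False

def verifies(rule: ConditionalRule, pair: str) -> bool:
    """Material-conditional check: the pair falsifies the rule only when the
    antecedent is true and the consequent is false."""
    antecedent = (pair[0] == rule.letter) != rule.negated_antecedent
    consequent = (pair[1] == rule.digit) != rule.negated_consequent
    return (not antecedent) or consequent

def mismatch_count(rule: ConditionalRule, pair: str) -> int:
    """Number of items named in the rule that differ from the items shown."""
    return int(pair[0] != rule.letter) + int(pair[1] != rule.digit)

# "H7" verifies both rules, but with 0 vs. 2 perceptual mismatches.
affirmative = ConditionalRule("H", "7")
negative = ConditionalRule("T", "4", negated_antecedent=True, negated_consequent=True)
for rule in (affirmative, negative):
    print(verifies(rule, "H7"), mismatch_count(rule, "H7"))
```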

    Action relevance in linguistic context drives word-induced motor activity

    Many neurocognitive studies on the role of motor structures in action-language processing have implicitly adopted a “dictionary-like” framework within which lexical meaning is constructed on the basis of an invariant set of semantic features. The debate has thus been centered on the question of whether motor activation is an integral part of lexical semantics (embodied theories) or the result of a post-lexical construction of a situation model (disembodied theories). However, research in psycholinguistics shows that lexical semantic processing and context-dependent meaning construction are tightly integrated. An understanding of the role of motor structures in action-language processing might thus be better achieved by focusing on the linguistic contexts under which such structures are recruited. Here, we therefore analyzed online modulations of grip force while subjects listened to target words embedded in different linguistic contexts. When the target word was a hand action verb and the sentence focused on that action (John signs the contract), an early increase in grip force was observed. No comparable increase was detected when the same word occurred in a context that shifted the focus toward the agent's mental state (John wants to sign the contract). The mere presence of an action word is thus not sufficient to trigger motor activation. Moreover, when the linguistic context set up a strong expectation for a hand action, a grip force increase was observed even when the tested word was a pseudo-verb. The presence of a known action word is thus not required to trigger motor activation. Importantly, however, the same linguistic contexts that sufficed to trigger motor activation with pseudo-verbs failed to trigger motor activation when the target words were verbs with no motor action reference. Context is thus not by itself sufficient to supersede an “incompatible” word meaning. We argue that motor structure activation is part of a dynamic process that integrates the lexical meaning potential of a term and the context in the online construction of a situation model, which is a crucial process for fluent and efficient online language comprehension.
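    A word-locked grip-force increase of the kind reported above could be quantified roughly as in the sketch below (an assumption-laden illustration, not the authors' pipeline; the sampling rate, baseline and analysis windows, and function names are placeholders).

```python
# Illustrative sketch: quantify a word-locked increase in grip force by
# baseline-correcting each trial and averaging an early post-onset window.

import numpy as np

def word_locked_increase(force: np.ndarray, fs: float, onset_s: float,
                         baseline=(-0.2, 0.0), window=(0.1, 0.4)) -> float:
    """force: 1-D grip-force trace; fs: sampling rate (Hz);
    onset_s: target-word onset within the trace (s).
    Returns the mean force in `window` minus the mean force in `baseline`,
    both defined relative to word onset (window bounds are assumptions)."""
    def mean_in(t0, t1):
        i0 = int((onset_s + t0) * fs)
        i1 = int((onset_s + t1) * fs)
        return force[i0:i1].mean()
    return mean_in(*window) - mean_in(*baseline)

def condition_mean(trials, fs, onsets):
    """Average the word-locked increase across the trials of one condition,
    e.g. action-focused vs. mental-state-focused contexts."""
    return np.mean([word_locked_increase(tr, fs, on) for tr, on in zip(trials, onsets)])
```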

    Grip Force Reveals the Context Sensitivity of Language-Induced Motor Activity during “Action Words”

    Studies demonstrating the involvement of motor brain structures in language processing typically focus on time windows beyond the latencies of lexical-semantic access. Consequently, such studies remain inconclusive as to whether motor brain structures are recruited directly in language processing or through post-linguistic conceptual imagery. In the present study, we introduce a grip-force sensor that allows online measurement of language-induced motor activity during sentence listening. We use this tool to investigate whether language-induced motor activity remains constant or is modulated in negative, as opposed to affirmative, linguistic contexts. Our findings demonstrate that this simple experimental paradigm can be used to study the online crosstalk between language and the motor systems in an ecological and economical manner. Our data further confirm that the motor brain structures that can be called upon during action-word processing are not mandatorily involved; the crosstalk is asymmetrically governed by the linguistic context and not vice versa.
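    Building on the previous sketch, the affirmative versus negative context manipulation could be summarized per participant and compared with a paired test, as illustrated below (again an assumed analysis for illustration, not the published one).

```python
# Illustrative sketch: paired comparison of word-locked grip-force increases
# between affirmative and negative linguistic contexts.

import numpy as np
from scipy.stats import ttest_rel

def context_contrast(affirmative: np.ndarray, negative: np.ndarray) -> dict:
    """Each array holds one value per participant, e.g. the mean word-locked
    grip-force increase (see the earlier sketch) in that context."""
    t, p = ttest_rel(affirmative, negative)
    return {"mean_difference": float(np.mean(affirmative - negative)),
            "t": float(t), "p": float(p)}
```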

    Methods for localizing the generators of cerebral electrical activity from electro- and magneto-encephalographic signals

    This work presents numerical evaluations of several dipole localization approaches from electroencephalographic (EEG) and magnetoencephalographic (MEG) recordings. First, the intrinsic accuracy of realistic modeling with the boundary element method (collocation) was systematically evaluated. Using a linear interpolation of the potential on each mesh triangle, rather than a constant interpolation, provided only a slight improvement of the EEG forward problem at the same computing cost. Next, the localization bias introduced by the classical spherical model in MEG was quantified; this bias ranged from 2.5 mm in the upper part of the head to 8 mm in the lower part of the head. It was then shown that, unlike spherical models, realistic models could retrieve dipole orientation with less than 20° error, even for radial orientations and noisy data. Finally, several techniques for combining EEG and MEG in a single inverse problem were evaluated on simulated data with spatially correlated noise. All these coupling techniques provided localization accuracy better than or equal to that of the best single modality, even when using few electrodes. Combining MEG and EEG with a realistic model and the boundary element method thus yields a robust method for localizing brain electrical activity.
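    As a rough illustration of the EEG/MEG coupling idea evaluated here, the sketch below (not the thesis code; the whitening scheme, function names, and data layout are assumptions) stacks noise-whitened EEG and MEG measurements and lead fields into a single least-squares dipole scan.

```python
# Illustrative sketch: fit a single dipole to combined EEG + MEG data by
# whitening each modality with its own noise covariance, stacking, and
# scanning candidate positions.

import numpy as np

def whitener(noise_cov: np.ndarray) -> np.ndarray:
    """Inverse matrix square root of a noise covariance (handles spatially
    correlated noise; assumes the covariance is full rank)."""
    vals, vecs = np.linalg.eigh(noise_cov)
    return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

def fit_dipole(data_eeg, data_meg, cov_eeg, cov_meg, leadfield, positions):
    """`leadfield(pos)` is assumed to return the stacked (n_eeg + n_meg, 3)
    gain matrix of a dipole at `pos` (computed, e.g., with a realistic BEM
    model). Returns the best-fitting position and dipole moment."""
    n_eeg, n_meg = len(data_eeg), len(data_meg)
    W = np.block([[whitener(cov_eeg), np.zeros((n_eeg, n_meg))],
                  [np.zeros((n_meg, n_eeg)), whitener(cov_meg)]])
    d = W @ np.concatenate([data_eeg, data_meg])
    best = None
    for pos in positions:
        G = W @ leadfield(pos)                       # whitened gain, shape (n, 3)
        q, *_ = np.linalg.lstsq(G, d, rcond=None)    # least-squares dipole moment
        err = np.linalg.norm(d - G @ q)
        if best is None or err < best[0]:
            best = (err, pos, q)
    return best[1], best[2]
```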

    Methods for localizing the generators of cerebral electrical activity from electro- and magneto-encephalographic signals

    This work presents numerical evaluations of several dipole localization approaches from electroencephalographic (EEG) and magnetoencephalographic (MEG) recordings. First, the intrinsic accuracy of realistic modeling with the boundary element method (collocation) was systematically evaluated. Using a linear interpolation of the potential on each mesh triangle, rather than a constant interpolation, provided only a slight improvement of the EEG forward problem at the same computing cost. Next, the localization bias introduced by the classical spherical model in MEG was quantified; this bias ranged from 2.5 mm in the upper part of the head to 8 mm in the lower part of the head. It was then shown that, unlike spherical models, realistic models could retrieve dipole orientation with less than 20 degrees of error, even for radial orientations and noisy data. Finally, several techniques for combining EEG and MEG in a single inverse problem were evaluated on simulated data with spatially correlated noise. All these coupling techniques provided localization accuracy better than or equal to that of the best single modality, even when using few electrodes. Combining MEG and EEG with a realistic model and the boundary element method thus produces a robust method for localizing brain electrical activity.

    When and How is Concord preferred? An Experimental approach

    ISBN: 978-2-8399-1580-9. A longstanding debate asks whether negative polarity (1a, NPI) and negative concord (1b, NC) involve identical or distinct syntactic/semantic operations. Although French (1a-b) and their cross-linguistic equivalents share the same first-order-logic interpretation (2a), disagreements remain as to how it obtains for each. Yet only (1b) ambiguously allows a double-negative reading (DN) (2b). Taking the English paraphrase in (2b) to be likewise ambiguous, May (89) proposed that DN encodes a compositional hierarchical scope relation between its negative quantifiers (3a), while NC involves the formation of a resumptive polyadic negative quantifier (3b). Applied to French, this analysis of NC has far-reaching consequences. First, NC is clearly distinguished from NPI, as n-words are cast as negative quantifiers. Second, it puts French (1b) and English (2b) under the same theoretical umbrella, questioning the validity of any NC macro-parameter. Third, the question of how (1b) and (2b) should be distinguished arises anew, particularly if French truly favors NC but English favors DN. Indeed, although the analysis elegantly allows both readings for (1b)-(2b) without lexical ambiguity, it remains surprisingly vague as to which factors favor NC over DN. Processing costs, intonation, quantifier parallelism, structural complexity, clause boundedness, and discourse have all been suggested to influence DN or NC, but in effect little is known about how speakers resolve such ambiguities within a single language. The paper explores this question experimentally. Ambiguous sentences like (1b) were paired with two scenes, each representing one reading. Subjects were asked to read the sentences aloud and pick the scene representing their meaning. Speech production and choice time (via mouse tracking) were recorded. Quantifier parallelism, structural complexity, and syntactic position were manipulated to probe their effects on interpretation. The design yields experimental data on the effects of quantificational parallelism, structural complexity, syntactic position, and processing time on the NC vs. DN choice and its intonation. On a resumptive quantification analysis of NC, the theoretical predictions are as follows: A) parallel, simple Pro-Pro structures should produce a stronger NC preference than more complex NP-NP ones, while non-parallel structures should favor DN; B) DN readings should lead to lengthened choice times relative to NC; C) NC and DN preferences should manifest characteristically distinct intonation contours. The paper reports the first results of this experiment. (1) a. Personne ne fait quoi que ce soit. b. Personne ne fait rien. (2) a. ¬∃x ∃y do(x,y) ‘No one does anything’. b. ¬∃x ¬∃y do(x,y) ‘No one does nothing’. (3) a. (NOx (NOy do(x,y))) b. NO do(x,y)
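    To make the contrast between the two readings concrete, here is a toy model-theoretic sketch (purely illustrative, not part of the experiment; the domain, predicate, and names are assumptions) showing that the NC reading (2a) and the DN reading (2b) impose different truth conditions.

```python
# Illustrative sketch: the two readings of "Personne ne fait rien" evaluated
# over a tiny model of people and actions.

from itertools import product

people = {"a", "b"}
actions = {"x", "y"}
does = {("a", "x")}          # a does x; b does nothing

# NC reading (2a): no one does anything  ->  not exists p, a such that do(p, a)
nc = not any((p, a) in does for p, a in product(people, actions))

# DN reading (2b): no one does nothing  ->  every person does something
dn = all(any((p, a) in does for a in actions) for p in people)

print(nc, dn)   # False, False for this model; with does = set() -> True, False
```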

    Fast realistic modeling in bioelectromagnetism using lead-field interpolation

    The practical use of realistic models in bioelectromagnetism is limited by time-consuming numerical calculations. We propose a method leading to much higher speed than currently available and compatible with any kind of numerical method (boundary elements (BEM), finite elements, finite differences). Illustrated here with the BEM for EEG and MEG, it applies to ECG and MCG as well. The principle is two-fold. First, a lead-field matrix is calculated (once and for all) for a grid of dipoles covering the brain volume. Second, any forward solution is interpolated from the pre-calculated lead fields corresponding to grid dipoles near the source. Extrapolation is used for shallow sources falling outside the grid. Three interpolation techniques were tested: trilinear, second-order BĂ©zier (Bernstein polynomials), and 3D spline. The trilinear interpolation yielded the highest speed gain, with factors better than 10,000 for a 9,000-triangle BEM model. More accurate results could be obtained with the BĂ©zier interpolation (speed gain around 1,000), which, combined with an 8-mm step grid, led to intrinsic localization and orientation errors of only 0.2 mm and 0.2 degrees. Further improvements in MEG could be obtained by interpolating only the contribution of secondary currents. Cropping grids by removing shallow points led to a much better estimation of dipole orientation in EEG than when solving the forward problem classically, providing an efficient alternative to locally refined models. This method should prove especially useful when combining realistic models with stochastic inverse procedures (simulated annealing, genetic algorithms) that require many forward calculations.
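    The core of the interpolation scheme can be sketched as follows (an illustrative reconstruction, not the published implementation; the grid layout, array shapes, and function names are assumptions). Lead fields are precomputed once on a regular dipole grid, and the trilinear variant then estimates the gain matrix of an arbitrary dipole from its eight surrounding grid nodes; extrapolation for shallow sources outside the grid is omitted here.

```python
# Illustrative sketch: trilinear interpolation of precomputed lead fields.

import numpy as np

def trilinear_leadfield(pos, grid_origin, grid_step, grid_lf):
    """grid_lf: array of shape (nx, ny, nz, n_sensors, 3) holding the
    precomputed lead field (gain for 3 dipole orientations) at each grid node.
    Returns the interpolated (n_sensors, 3) gain matrix for a dipole at `pos`,
    assumed to lie inside the grid."""
    rel = (np.asarray(pos, dtype=float) - np.asarray(grid_origin)) / grid_step
    i, j, k = np.floor(rel).astype(int)
    fx, fy, fz = rel - (i, j, k)                 # fractional offsets in the cell
    out = np.zeros(grid_lf.shape[3:])
    for di, dj, dk in np.ndindex(2, 2, 2):       # the 8 surrounding grid nodes
        w = ((fx if di else 1 - fx) *
             (fy if dj else 1 - fy) *
             (fz if dk else 1 - fz))
        out += w * grid_lf[i + di, j + dj, k + dk]
    return out
```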
    • 

    corecore